A Unified Bias-Variance Decomposition for Zero-One and Squared Loss

Author

  • Pedro M. Domingos
Abstract

The bias-variance decomposition is a very useful and widely-used tool for understanding machine-learning algorithms. It was originally developed for squared loss. In recent years, several authors have proposed decompositions for zero-one loss, but each has significant shortcomings. In particular, all of these decompositions have only an intuitive relationship to the original squared-loss one. In this paper, we define bias and variance for an arbitrary loss function, and show that the resulting decomposition specializes to the standard one for the squared-loss case, and to a close relative of Kong and Dietterich’s (1995) one for the zero-one case. The same decomposition also applies to variable misclassification costs. We show a number of interesting consequences of the unified definition. For example, Schapire et al.’s (1997) notion of “margin” can be expressed as a function of the zero-one bias and variance, making it possible to formally relate a classifier ensemble’s generalization error to the base learner’s bias and variance on training examples. Experiments with the unified definition lead to further insights.
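The zero-one case of the unified decomposition described above can be illustrated with a short numerical sketch. This is our own illustrative estimator of the quantities the abstract names (main prediction, bias, variance), assuming zero noise so the true label is the optimal prediction; it is not the paper's exact procedure, and all names here are our own.

```python
import numpy as np
from collections import Counter

def zero_one_bias_variance(predictions, y_true):
    """Estimate average zero-one bias and variance from the predictions of
    classifiers trained on different training sets (e.g., bootstrap samples).

    predictions: array of shape (n_models, n_examples), predicted labels.
    y_true: array of shape (n_examples,), taken as the optimal prediction
        (i.e., label noise is assumed to be zero).
    """
    n_models, n_examples = predictions.shape
    bias = np.zeros(n_examples)
    variance = np.zeros(n_examples)
    for j in range(n_examples):
        # For zero-one loss the main prediction is the modal (most frequent) label.
        main = Counter(predictions[:, j]).most_common(1)[0][0]
        # Bias is 0/1 per example: does the main prediction miss the optimal one?
        bias[j] = float(main != y_true[j])
        # Variance: how often an individual model deviates from the main prediction.
        variance[j] = np.mean(predictions[:, j] != main)
    return bias.mean(), variance.mean()

# Toy example: 5 models, 4 test examples.
preds = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 1, 1, 0],
])
y = np.array([0, 1, 0, 0])
b, v = zero_one_bias_variance(preds, y)  # b = 0.25, v = 0.15
```

On the toy data, one of the four examples has a modal prediction that disagrees with the true label (bias 0.25), and on three examples one model in five deviates from the mode (variance 0.15).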


Related articles

A Unified Bias-Variance Decomposition and its Applications

This paper presents a unified bias-variance decomposition that is applicable to squared loss, zero-one loss, variable misclassification costs, and other loss functions. The unified decomposition sheds light on a number of significant issues: the relation between some of the previously-proposed decompositions for zero-one loss and the original one for squared loss, the relation between bias, var...


Bias Plus Variance Decomposition for Zero-One Loss Functions

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one misclassification loss functions until t...


Bias Plus Variance Decomposition for Zero-One Loss (Machine Learning: Proceedings of the Thirteenth International Conference, 1996)

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions un...


The Minimum Variance Unbiased Estimator

This module motivates and introduces the minimum variance unbiased estimator (MVUE). This is the primary criterion in the classical (frequentist) approach to parameter estimation. We introduce the concepts of mean squared error (MSE), variance, bias, unbiased estimators, and the bias-variance decomposition of the MSE. ...
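The bias-variance decomposition of the MSE mentioned in this abstract, MSE = bias² + variance, can be verified numerically. The shrinkage estimator of a normal mean used below is our own illustrative choice, not taken from the module:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 1.0, 10
# Illustrative biased estimator of the mean: T = 0.9 * (sample mean).
c = 0.9
estimates = np.array([c * rng.normal(mu, sigma, n).mean()
                      for _ in range(100_000)])

mse = np.mean((estimates - mu) ** 2)   # expected squared error
bias = estimates.mean() - mu           # systematic error of the estimator
var = estimates.var()                  # spread around the estimator's own mean

# The identity MSE = bias^2 + variance holds exactly (up to float rounding).
gap = abs(mse - (bias ** 2 + var))
```

Because the identity is algebraic (it follows from expanding the square around the estimator's mean), the gap is at the level of floating-point rounding regardless of the random seed.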




Journal:

Volume   Issue 

Pages  -

Publication year: 2000